/*
 * TLB Management (flush/create/diagnostics) for ARC700
 *
 * Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * vineetg: Aug 2011
 *  -Reintroduce duplicate PD fixup - some customer chips still have the issue
 *
 * vineetg: May 2011
 *  -No need to flush_cache_page( ) for each call to update_mmu_cache()
 *   some of the LMBench tests improved amazingly
 *	= page-fault thrice as fast (75 usec to 28 usec)
 *	= mmap twice as fast (9.6 msec to 4.6 msec),
 *	= fork (5.3 msec to 3.7 msec)
 *
 * vineetg: April 2011 :
 *  -MMU v3: PD{0,1} bits layout changed: they don't overlap anymore,
 *   which helps avoid a shift when preparing PD0 from the PTE
 *
 * vineetg: April 2011 : Preparing for MMU V3
 *  -MMU v2/v3 BCRs decoded differently
 *  -Remove TLB_SIZE hardcoding as it's variable now: 256 or 512
 *  -tlb_entry_erase( ) can be void
 *  -local_flush_tlb_range( ):
 *	= need not "ceil" @end
 *	= walks MMU only if range spans < 32 entries, as opposed to 256
 *
 * Vineetg: Sept 10th 2008
 *  -Changes related to MMU v2 (Rel 4.8)
 *
 * Vineetg: Aug 29th 2008
 *  -In TLB Flush operations (Metal Fix MMU) there is an explicit command to
 *   flush the Micro-TLBs. If the TLB Index Reg is invalid prior to the
 *   TLBIVUTLB cmd, it fails. Thus it needs to be loaded with ANY valid value
 *   before invoking the TLBIVUTLB cmd
 *
 * Vineetg: Aug 21st 2008:
 *  -Reduced the duration of IRQ lockouts in TLB Flush routines
 *  -Multiple copies of TLB erase code separated into a "single" function
 *  -In TLB Flush routines, interrupt disabling moved UP to retrieve ASID
 *   in interrupt-safe region.
 *
 * Vineetg: April 23rd Bug #93131
 *    Problem: tlb_flush_kernel_range() doesn't do anything if the range to
 *             flush is more than the size of the TLB itself.
 *
 * Rahul Trivedi : Codito Technologies 2004
 */

#include <linux/module.h>
#include <asm/arcregs.h>
#include <asm/setup.h>
#include <asm/mmu_context.h>
#include <asm/mmu.h>

/* Need for ARC MMU v2
 *
 * ARC700 MMU-v1 had a Joint-TLB for Code and Data and is 2-way set-assoc.
 * For a memcpy operation with 3 players (src/dst/code) such that all 3 pages
 * map into the same set, there would be contention for the 2 ways, causing
 * severe Thrashing.
 *
 * Although J-TLB is 2-way set assoc, ARC700 caches J-TLB into uTLBs which have
 * much higher associativity. u-D-TLB is 8 ways, u-I-TLB is 4 ways.
 * Given this, the thrashing problem should never happen because once the 3
 * J-TLB entries are created (even though the 3rd will knock out one of the
 * prev two), the u-D-TLB and u-I-TLB will have what is required to accomplish
 * the memcpy.
 *
 * Yet we still see the Thrashing because a J-TLB Write causes a flush of the
 * u-TLBs. This is a simple design for keeping them in sync. So what do we do?
 * The solution which James came up with was pretty neat. It utilised the
 * associativity of the uTLBs by not always invalidating them, but only when
 * absolutely necessary.
 *
 * - Existing TLB commands work as before
 * - New command (TLBWriteNI) for TLB write without clearing uTLBs
 * - New command (TLBIVUTLB) to invalidate uTLBs.
 *
 * The uTLBs need only be invalidated when pages are being removed from the
 * OS page table. If a 'victim' TLB entry is being overwritten in the main TLB
 * as a result of a miss, the removed entry is still allowed to exist in the
 * uTLBs as it is still valid and present in the OS page table. This allows the
 * full associativity of the uTLBs to hide the limited associativity of the
 * main TLB.
 *
 * During a miss handler, the new "TLBWriteNI" command is used to load
 * entries without clearing the uTLBs.
 *
 * When the OS page table is updated, TLB entries that may be associated with a
 * removed page are removed (flushed) from the TLB using TLBWrite. In this
 * circumstance, the uTLBs must also be cleared. This is done by using the
 * existing TLBWrite command. An explicit IVUTLB is also required for those
 * corner cases when TLBWrite was not executed at all because the corresponding
 * J-TLB entry got evicted/replaced.
 */
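
/*
 * Summary of the commands discussed above (an informal sketch; exact
 * semantics are MMU-version specific):
 *
 *	TLBWrite   - write J-TLB entry AND invalidate uTLBs (page unmapped)
 *	TLBWriteNI - write J-TLB entry, leave uTLBs intact (TLB miss refill)
 *	TLBIVUTLB  - explicitly invalidate only the uTLBs
 */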

/* A copy of the ASID from the PID reg is kept in asid_cache */
int asid_cache = FIRST_ASID;

/* ASID to mm struct mapping. We have one extra entry corresponding to
 * NO_ASID to save us a compare when clearing the mm entry for old asid
 * see get_new_mmu_context (asm-arc/mmu_context.h)
 */
struct mm_struct *asid_mm_map[NUM_ASID + 1];

/*
 * Utility Routine to erase a J-TLB entry
 * The procedure is to look it up in the MMU. If found, ERASE it by
 * issuing a TLBWrite CMD with PD0 = PD1 = 0
 */

static void __tlb_entry_erase(void)
{
	write_aux_reg(ARC_REG_TLBPD1, 0);
	write_aux_reg(ARC_REG_TLBPD0, 0);
	write_aux_reg(ARC_REG_TLBCOMMAND, TLBWrite);
}

static void tlb_entry_erase(unsigned int vaddr_n_asid)
{
	unsigned int idx;

	/* Locate the TLB entry for this vaddr + ASID */
	write_aux_reg(ARC_REG_TLBPD0, vaddr_n_asid);
	write_aux_reg(ARC_REG_TLBCOMMAND, TLBProbe);
	idx = read_aux_reg(ARC_REG_TLBINDEX);

	/* No error means entry found, zero it out */
	if (likely(!(idx & TLB_LKUP_ERR))) {
		__tlb_entry_erase();
	} else {		/* Some sort of Error */

		/* Duplicate entry error */
		if (idx & 0x1) {
			/* TODO we need to handle this case too */
			pr_emerg("unhandled Duplicate flush for %x\n",
				 vaddr_n_asid);
		}
		/* else entry not found so nothing to do */
	}
}
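
/*
 * Illustrative @vaddr_n_asid composition (assuming 8K pages and 8-bit
 * ASIDs): to erase the mapping of user vaddr 0x20002000 in an mm with
 * ASID 0x42, callers pass (0x20002000 & PAGE_MASK) | 0x42 == 0x20002042,
 * i.e. the page-aligned vaddr with the ASID in the low 8 bits of PD0.
 */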

/****************************************************************************
 * ARC700 MMU caches recently used J-TLB entries (RAM) as uTLBs (FLOPs)
 *
 * New IVUTLB cmd in MMU v2 explicitly invalidates the uTLB
 *
 * utlb_invalidate ( )
 *  -For v2 MMU calls Flush uTLB Cmd
 *  -For v1 MMU does nothing (except for Metal Fix v1 MMU)
 *      This is because in v1 TLBWrite itself invalidates uTLBs
 ***************************************************************************/

static void utlb_invalidate(void)
{
#if (CONFIG_ARC_MMU_VER >= 2)

#if (CONFIG_ARC_MMU_VER < 3)
	/* MMU v2 introduced the uTLB Flush command.
	 * There was however an obscure hardware bug, where the uTLB flush
	 * would fail when a prior probe for J-TLB (both totally unrelated)
	 * returned lkup err - because the entry didn't exist in the MMU.
	 * The workaround was to set the Index reg with some valid value,
	 * prior to the flush. This was fixed in MMU v3, hence no longer needed
	 */
	unsigned int idx;

	/* make sure INDEX Reg is valid */
	idx = read_aux_reg(ARC_REG_TLBINDEX);

	/* If not, write some dummy val */
	if (unlikely(idx & TLB_LKUP_ERR))
		write_aux_reg(ARC_REG_TLBINDEX, 0xa);
#endif

	write_aux_reg(ARC_REG_TLBCOMMAND, TLBIVUTLB);
#endif

}

/*
 * Unconditionally (without lookup) erase the entire MMU contents
 */

noinline void local_flush_tlb_all(void)
{
	unsigned long flags;
	unsigned int entry;
	struct cpuinfo_arc_mmu *mmu = &cpuinfo_arc700[smp_processor_id()].mmu;

	local_irq_save(flags);

	/* Load PD0 and PD1 with template for a Blank Entry */
	write_aux_reg(ARC_REG_TLBPD1, 0);
	write_aux_reg(ARC_REG_TLBPD0, 0);

	for (entry = 0; entry < mmu->num_tlb; entry++) {
		/* write this entry to the TLB */
		write_aux_reg(ARC_REG_TLBINDEX, entry);
		write_aux_reg(ARC_REG_TLBCOMMAND, TLBWrite);
	}

	utlb_invalidate();

	local_irq_restore(flags);
}

/*
 * Flush the entire MM for userland. The fastest way is to move to the
 * next ASID
 */
noinline void local_flush_tlb_mm(struct mm_struct *mm)
{
	/*
	 * Small optimisation courtesy IA64
	 * flush_mm is called during fork, exit, munmap etc, multiple times
	 * as well. Only for fork( ) do we need to move the parent to a new
	 * MMU ctxt; all other cases are NOPs, hence this check.
	 */
	if (atomic_read(&mm->mm_users) == 0)
		return;

	/*
	 * Workaround for Android weirdism:
	 * A binder VMA could end up in a task such that vma->vm_mm != tsk->mm
	 * the old code would cause the h/w and s/w ASID to get out of sync
	 */
	if (current->mm != mm)
		destroy_context(mm);
	else
		get_new_mmu_context(mm);
}

/*
 * Flush a Range of TLB entries for userland.
 * @start is inclusive, while @end is exclusive
 * Difference between this and Kernel Range Flush is
 *  -Here the fastest way (if range is too large) is to move to next ASID
 *   without doing any explicit Shootdown
 *  -In case of kernel Flush, entry has to be shot down explicitly
 */
void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
			   unsigned long end)
{
	unsigned long flags;
	unsigned int asid;

	/* If range @start to @end is more than 32 TLB entries deep,
	 * it's better to move to a new ASID rather than searching for
	 * individual entries and then shooting them down
	 *
	 * The calculation above is rough; it doesn't account for unaligned
	 * parts, but since this is heuristic-based anyway, that's fine
	 */
	if (unlikely((end - start) >= PAGE_SIZE * 32)) {
		local_flush_tlb_mm(vma->vm_mm);
		return;
	}

	/*
	 * @start moved to page start: this alone suffices for checking
	 * the loop end condition below, w/o need for aligning @end to end
	 * e.g. 2000 to 4001 will anyhow loop twice
	 */
	start &= PAGE_MASK;

	local_irq_save(flags);
	asid = vma->vm_mm->context.asid;

	if (asid != NO_ASID) {
		while (start < end) {
			tlb_entry_erase(start | (asid & 0xff));
			start += PAGE_SIZE;
		}
	}

	utlb_invalidate();

	local_irq_restore(flags);
}
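
/*
 * Illustration of the 32-entry heuristic above (assuming the ARC700
 * default 8K page size): a 64K munmap spans only 8 pages, so each page
 * is probed and erased individually; a 1 MB munmap spans 128 pages, so
 * the whole ASID is simply retired via local_flush_tlb_mm() instead of
 * doing 128 probe+erase passes with IRQs locked out.
 */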

/* Flush the kernel TLB entries - vmalloc/modules (Global from MMU perspective)
 *  @start, @end interpreted as kvaddr
 * Interestingly, shared TLB entries can also be flushed using just
 * @start,@end alone (interpreted as user vaddr), although technically SASID
 * is also needed. However our smart TLBProbe lookup takes care of that.
 */
void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
	unsigned long flags;

	/* exactly the same as above, except the TLB entry doesn't take
	 * an ASID */

	if (unlikely((end - start) >= PAGE_SIZE * 32)) {
		local_flush_tlb_all();
		return;
	}

	start &= PAGE_MASK;

	local_irq_save(flags);
	while (start < end) {
		tlb_entry_erase(start);
		start += PAGE_SIZE;
	}

	utlb_invalidate();

	local_irq_restore(flags);
}

/*
 * Delete TLB entry in MMU for a given page (??? address)
 * NOTE: One TLB entry contains the translation for a single PAGE
 */

void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
{
	unsigned long flags;

	/* Note that it is critical that interrupts are DISABLED between
	 * checking the ASID and using it to flush the TLB entry
	 */
	local_irq_save(flags);

	if (vma->vm_mm->context.asid != NO_ASID) {
		tlb_entry_erase((page & PAGE_MASK) |
				(vma->vm_mm->context.asid & 0xff));
		utlb_invalidate();
	}

	local_irq_restore(flags);
}

/*
 * Routine to create a TLB entry
 */
void create_tlb(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
{
	unsigned long flags;
	unsigned int idx, asid_or_sasid, rwx;

	/*
	 * create_tlb() assumes that current->mm == vma->vm_mm, since
	 * -the ASID for the TLB entry is fetched from the MMU ASID reg
	 *  (valid for curr)
	 * -it completes the lazy write to the SASID reg (again valid for
	 *  curr tsk)
	 *
	 * Removing the assumption involves
	 * -Using vma->vm_mm->context{ASID,SASID}, as opposed to the MMU reg.
	 * -Fixing the TLB paranoid debug code to not trigger false negatives.
	 * -More importantly it makes this handler inconsistent with the
	 *  fast-path TLB Refill handler which always deals with "current"
	 *
	 * Let's see the use cases when current->mm != vma->vm_mm and we land
	 * here:
	 *  1. execve->copy_strings()->__get_user_pages->handle_mm_fault
	 *     Here the VM wants to pre-install a TLB entry for the user stack
	 *     while current->mm still points to the pre-execve mm (hence the
	 *     condition). However the stack vaddr is soon relocated
	 *     (randomization) and move_page_tables() tries to undo that TLB
	 *     entry. Thus not creating the TLB entry is no worse.
	 *
	 *  2. ptrace(POKETEXT) causes a CoW - the debugger (current)
	 *     inserting a breakpoint in the debugged task. Not creating a
	 *     TLB entry now is not performance critical.
	 *
	 * Both the cases above are not good enough for code churn.
	 */
	if (current->active_mm != vma->vm_mm)
		return;

	local_irq_save(flags);

	tlb_paranoid_check(vma->vm_mm->context.asid, address);

	address &= PAGE_MASK;

	/* update this PTE's credentials */
	pte_val(*ptep) |= (_PAGE_PRESENT | _PAGE_ACCESSED);

	/* Create HW TLB(PD0,PD1) from PTE */

	/* ASID for this task */
	asid_or_sasid = read_aux_reg(ARC_REG_PID) & 0xff;

	write_aux_reg(ARC_REG_TLBPD0, address | asid_or_sasid |
		      (pte_val(*ptep) & PTE_BITS_IN_PD0));

	/*
	 * ARC MMU provides fully orthogonal access bits for K/U mode,
	 * however Linux only saves 1 set to save PTE real-estate
	 * Here we convert 3 PTE bits into 6 MMU bits:
	 * -Kernel only entries have Kr Kw Kx 0 0 0
	 * -User entries have mirrored K and U bits
	 */
	rwx = pte_val(*ptep) & PTE_BITS_RWX;

	if (pte_val(*ptep) & _PAGE_GLOBAL)
		rwx <<= 3;		/* r w x => Kr Kw Kx 0 0 0 */
	else
		rwx |= (rwx << 3);	/* r w x => Kr Kw Kx Ur Uw Ux */
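
	/*
	 * Worked example of the conversion above (illustrative): a user page
	 * mapped r-x has rwx = r|x. It is not _PAGE_GLOBAL, so after
	 * rwx |= rwx << 3 the MMU sees Kr Kx Ur Ux set and Kw Uw clear, i.e.
	 * kernel and user mode get identical permissions for this page.
	 * A _PAGE_GLOBAL (kernel) page would instead get only Kr Kx.
	 */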

	/* Load remaining info in PD1 (Page Frame Addr and Kx/Kw/Kr Flags) */
	write_aux_reg(ARC_REG_TLBPD1,
		      rwx | (pte_val(*ptep) & PTE_BITS_NON_RWX_IN_PD1));

	/* First verify if an entry for this vaddr+ASID already exists */
	write_aux_reg(ARC_REG_TLBCOMMAND, TLBProbe);
	idx = read_aux_reg(ARC_REG_TLBINDEX);

	/*
	 * If not already present, get a free slot from the MMU.
	 * Otherwise, the Probe would have located the entry and set the INDEX
	 * Reg with the existing location. This will cause the Write CMD to
	 * over-write the existing entry with the new PD0 and PD1
	 */
	if (likely(idx & TLB_LKUP_ERR))
		write_aux_reg(ARC_REG_TLBCOMMAND, TLBGetIndex);

	/*
	 * Commit the Entry to MMU
	 * It doesn't sound safe to use the TLBWriteNI cmd here
	 * which doesn't flush uTLBs. I'd rather be safe than sorry.
	 */
	write_aux_reg(ARC_REG_TLBCOMMAND, TLBWrite);

	local_irq_restore(flags);
}

/*
 * Called at the end of pagefault, for a userspace mapped page
 *  -pre-install the corresponding TLB entry into MMU
 *  -Finalize the delayed D-cache flush of kernel mapping of page due to
 *	flush_dcache_page(), copy_user_page()
 *
 * Note that flush (when done) involves both WBACK - so the physical page is
 * in sync - as well as INV - so any non-congruent aliases don't remain
 */
void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned,
		      pte_t *ptep)
{
	unsigned long vaddr = vaddr_unaligned & PAGE_MASK;
	unsigned long paddr = pte_val(*ptep) & PAGE_MASK;
	struct page *page = pfn_to_page(pte_pfn(*ptep));

	create_tlb(vma, vaddr, ptep);

	if (page == ZERO_PAGE(0)) {
		return;
	}

	/*
	 * Exec page : Independent of aliasing/page-color considerations,
	 *	       since icache doesn't snoop dcache on ARC, any dirty
	 *	       K-mapping of a code page needs to be wback+inv so that
	 *	       icache fetch by userspace sees code correctly.
	 * !EXEC page: If K-mapping is NOT congruent to U-mapping, flush it
	 *	       so userspace sees the right data.
	 *  (Avoids the flush for Non-exec + congruent mapping case)
	 */
	if ((vma->vm_flags & VM_EXEC) ||
	    addr_not_cache_congruent(paddr, vaddr)) {

		int dirty = !test_and_set_bit(PG_dc_clean, &page->flags);
		if (dirty) {
			/* wback + inv dcache lines */
			__flush_dcache_page(paddr, paddr);

			/* invalidate any existing icache lines */
			if (vma->vm_flags & VM_EXEC)
				__inv_icache_page(paddr, vaddr);
		}
	}
}

/* Read the MMU Build Configuration Register, decode it and save into
 * the cpuinfo structure for later use.
 * No validation is done here, simply read/convert the BCR
 */
void read_decode_mmu_bcr(void)
{
	struct cpuinfo_arc_mmu *mmu = &cpuinfo_arc700[smp_processor_id()].mmu;
	unsigned int tmp;
	struct bcr_mmu_1_2 {
#ifdef CONFIG_CPU_BIG_ENDIAN
		unsigned int ver:8, ways:4, sets:4, u_itlb:8, u_dtlb:8;
#else
		unsigned int u_dtlb:8, u_itlb:8, sets:4, ways:4, ver:8;
#endif
	} *mmu2;

	struct bcr_mmu_3 {
#ifdef CONFIG_CPU_BIG_ENDIAN
		unsigned int ver:8, ways:4, sets:4, osm:1, reserv:3, pg_sz:4,
			     u_itlb:4, u_dtlb:4;
#else
		unsigned int u_dtlb:4, u_itlb:4, pg_sz:4, reserv:3, osm:1,
			     sets:4, ways:4, ver:8;
#endif
	} *mmu3;

	tmp = read_aux_reg(ARC_REG_MMU_BCR);
	mmu->ver = (tmp >> 24);

	if (mmu->ver <= 2) {
		mmu2 = (struct bcr_mmu_1_2 *)&tmp;
		mmu->pg_sz = PAGE_SIZE;
		mmu->sets = 1 << mmu2->sets;
		mmu->ways = 1 << mmu2->ways;
		mmu->u_dtlb = mmu2->u_dtlb;
		mmu->u_itlb = mmu2->u_itlb;
	} else {
		mmu3 = (struct bcr_mmu_3 *)&tmp;
		mmu->pg_sz = 512 << mmu3->pg_sz;
		mmu->sets = 1 << mmu3->sets;
		mmu->ways = 1 << mmu3->ways;
		mmu->u_dtlb = mmu3->u_dtlb;
		mmu->u_itlb = mmu3->u_itlb;
	}

	mmu->num_tlb = mmu->sets * mmu->ways;
}
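
/*
 * Decode example (illustrative field values): an MMU v3 BCR with
 * pg_sz = 4, sets = 7, ways = 2 yields pg_sz = 512 << 4 = 8K,
 * 1 << 7 = 128 sets and 1 << 2 = 4 ways, hence num_tlb = 512 entries.
 */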
"SASID" : ""); 539af617428SVineet Gupta 540af617428SVineet Gupta return buf; 541af617428SVineet Gupta } 542af617428SVineet Gupta 543ce759956SPaul Gortmaker void arc_mmu_init(void) 544cc562d2eSVineet Gupta { 545af617428SVineet Gupta char str[256]; 546af617428SVineet Gupta struct cpuinfo_arc_mmu *mmu = &cpuinfo_arc700[smp_processor_id()].mmu; 547af617428SVineet Gupta 548af617428SVineet Gupta printk(arc_mmu_mumbojumbo(0, str, sizeof(str))); 549af617428SVineet Gupta 550af617428SVineet Gupta /* For efficiency sake, kernel is compile time built for a MMU ver 551af617428SVineet Gupta * This must match the hardware it is running on. 552af617428SVineet Gupta * Linux built for MMU V2, if run on MMU V1 will break down because V1 553af617428SVineet Gupta * hardware doesn't understand cmds such as WriteNI, or IVUTLB 554af617428SVineet Gupta * On the other hand, Linux built for V1 if run on MMU V2 will do 555af617428SVineet Gupta * un-needed workarounds to prevent memcpy thrashing. 556af617428SVineet Gupta * Similarly MMU V3 has new features which won't work on older MMU 557af617428SVineet Gupta */ 558af617428SVineet Gupta if (mmu->ver != CONFIG_ARC_MMU_VER) { 559af617428SVineet Gupta panic("MMU ver %d doesn't match kernel built for %d...\n", 560af617428SVineet Gupta mmu->ver, CONFIG_ARC_MMU_VER); 561af617428SVineet Gupta } 562af617428SVineet Gupta 563af617428SVineet Gupta if (mmu->pg_sz != PAGE_SIZE) 564af617428SVineet Gupta panic("MMU pg size != PAGE_SIZE (%luk)\n", TO_KB(PAGE_SIZE)); 565af617428SVineet Gupta 566cc562d2eSVineet Gupta /* 567cc562d2eSVineet Gupta * ASID mgmt data structures are compile time init 568cc562d2eSVineet Gupta * asid_cache = FIRST_ASID and asid_mm_map[] all zeroes 569cc562d2eSVineet Gupta */ 570cc562d2eSVineet Gupta 571cc562d2eSVineet Gupta local_flush_tlb_all(); 572cc562d2eSVineet Gupta 573cc562d2eSVineet Gupta /* Enable the MMU */ 574cc562d2eSVineet Gupta write_aux_reg(ARC_REG_PID, MMU_ENABLE); 57541195d23SVineet Gupta 57641195d23SVineet Gupta /* In smp we use this reg for interrupt 1 scratch */ 57741195d23SVineet Gupta #ifndef CONFIG_SMP 57841195d23SVineet Gupta /* swapper_pg_dir is the pgd for the kernel, used by vmalloc */ 57941195d23SVineet Gupta write_aux_reg(ARC_REG_SCRATCH_DATA0, swapper_pg_dir); 58041195d23SVineet Gupta #endif 581cc562d2eSVineet Gupta } 582cc562d2eSVineet Gupta 583cc562d2eSVineet Gupta /* 584cc562d2eSVineet Gupta * TLB Programmer's Model uses Linear Indexes: 0 to {255, 511} for 128 x {2,4} 585cc562d2eSVineet Gupta * The mapping is Column-first. 586cc562d2eSVineet Gupta * --------------------- ----------- 587cc562d2eSVineet Gupta * |way0|way1|way2|way3| |way0|way1| 588cc562d2eSVineet Gupta * --------------------- ----------- 589cc562d2eSVineet Gupta * [set0] | 0 | 1 | 2 | 3 | | 0 | 1 | 590cc562d2eSVineet Gupta * [set1] | 4 | 5 | 6 | 7 | | 2 | 3 | 591cc562d2eSVineet Gupta * ~ ~ ~ ~ 592cc562d2eSVineet Gupta * [set127] | 508| 509| 510| 511| | 254| 255| 593cc562d2eSVineet Gupta * --------------------- ----------- 594cc562d2eSVineet Gupta * For normal operations we don't(must not) care how above works since 595cc562d2eSVineet Gupta * MMU cmd getIndex(vaddr) abstracts that out. 596cc562d2eSVineet Gupta * However for walking WAYS of a SET, we need to know this 597cc562d2eSVineet Gupta */ 598cc562d2eSVineet Gupta #define SET_WAY_TO_IDX(mmu, set, way) ((set) * mmu->ways + (way)) 599cc562d2eSVineet Gupta 600cc562d2eSVineet Gupta /* Handling of Duplicate PD (TLB entry) in MMU. 

/* Handling of Duplicate PD (TLB entry) in MMU.
 * -Could be due to buggy customer tapeouts or obscure kernel bugs
 * -The MMU complains not at the time of duplicate PD installation, but at
 *      the time of a lookup matching multiple ways.
 * -Ideally these should never happen - but if they do - workaround by
 *      deleting the duplicate one.
 * -Knob to be verbose about it (TODO: hook it up to debugfs)
 */
volatile int dup_pd_verbose = 1;/* Be silent about it or complain (default) */

void do_tlb_overlap_fault(unsigned long cause, unsigned long address,
			  struct pt_regs *regs)
{
	int set, way, n;
	unsigned int pd0[4], pd1[4];	/* assume max 4 ways */
	unsigned long flags, is_valid;
	struct cpuinfo_arc_mmu *mmu = &cpuinfo_arc700[smp_processor_id()].mmu;

	local_irq_save(flags);

	/* re-enable the MMU */
	write_aux_reg(ARC_REG_PID, MMU_ENABLE | read_aux_reg(ARC_REG_PID));

	/* loop through all sets of TLB */
	for (set = 0; set < mmu->sets; set++) {

		/* read out all the ways of the current set */
		for (way = 0, is_valid = 0; way < mmu->ways; way++) {
			write_aux_reg(ARC_REG_TLBINDEX,
				      SET_WAY_TO_IDX(mmu, set, way));
			write_aux_reg(ARC_REG_TLBCOMMAND, TLBRead);
			pd0[way] = read_aux_reg(ARC_REG_TLBPD0);
			pd1[way] = read_aux_reg(ARC_REG_TLBPD1);
			is_valid |= pd0[way] & _PAGE_PRESENT;
		}

		/* If all the WAYS in the SET are empty, skip to the next SET */
		if (!is_valid)
			continue;

		/* Scan the set for duplicate ways: needs a nested loop */
		for (way = 0; way < mmu->ways; way++) {
			if (!pd0[way])
				continue;

			for (n = way + 1; n < mmu->ways; n++) {
				if ((pd0[way] & PAGE_MASK) ==
				    (pd0[n] & PAGE_MASK)) {

					if (dup_pd_verbose) {
						pr_info("Duplicate PD's @"
							"[%d:%d]/[%d:%d]\n",
							set, way, set, n);
						pr_info("TLBPD0[%u]: %08x\n",
							way, pd0[way]);
					}

					/*
					 * clear entry @way and not @n. This is
					 * critical to our optimised loop
					 */
					pd0[way] = pd1[way] = 0;
					write_aux_reg(ARC_REG_TLBINDEX,
						SET_WAY_TO_IDX(mmu, set, way));
					__tlb_entry_erase();
				}
			}
		}
	}

	local_irq_restore(flags);
}

/***********************************************************************
 * Diagnostic Routines
 *  -Called from Low Level TLB Handlers if things don't look good
 **********************************************************************/

#ifdef CONFIG_ARC_DBG_TLB_PARANOIA

/*
 * Low Level ASM TLB handler calls this if it finds that the HW and SW ASIDs
 * don't match
 */
void print_asid_mismatch(int is_fast_path)
{
	int pid_sw, pid_hw;
	pid_sw = current->active_mm->context.asid;
	pid_hw = read_aux_reg(ARC_REG_PID) & 0xff;

	pr_emerg("ASID Mismatch in %s Path Handler: sw-pid=0x%x hw-pid=0x%x\n",
		 is_fast_path ? "Fast" : "Slow", pid_sw, pid_hw);

	__asm__ __volatile__("flag 1");
}

void tlb_paranoid_check(unsigned int pid_sw, unsigned long addr)
{
	unsigned int pid_hw;

	pid_hw = read_aux_reg(ARC_REG_PID) & 0xff;

	if (addr < 0x70000000 && ((pid_hw != pid_sw) || (pid_sw == NO_ASID)))
		print_asid_mismatch(0);
}
#endif