    • 22. Granted invention patent
    • Memory allocation technique for maintaining an even distribution of cache page addresses within a data structure
    • US6016529A
    • 2000-01-18
    • US978940
    • 1997-11-26
    • Larry William Woodman
    • Larry William Woodman
    • G06F12/08; G06F12/10; G06F12/12; G06F17/30
    • G06F12/10; G06F12/0864; G06F12/121; G06F2212/653; Y10S707/99953
    • In a computer system, a data structure is provided in memory for storing one or more data files from an external device. The data files stored in the data structure are accessible by a number of processes executing in the computer system. The computer system includes a storage device such as a cache for storing data from a subset of pages of the memory. Each of the pages of the cache is referred to as a cache page, having an associated cache page address. A physical address is allocated for storing each page of a retrieved data file stored in the data structure such that a cache page address portion of the physical address is selected from the available cache page addresses. The physical address is further selected such that the cache page addresses are substantially evenly distributed amongst the pages of the retrieved data file and the data structure in order to minimize thrashing in the cache and enhance performance.
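The allocation scheme above amounts to spreading the pages of a retrieved file evenly over the available cache page addresses ("colors"). The C sketch below is a minimal illustration of that idea, not the patented implementation: the page size, cache size, frame pool, and the round-robin color choice are all assumptions made for the example.

```c
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE   4096u
#define CACHE_SIZE  (256u * 1024u)              /* assumed physically indexed cache */
#define NUM_COLORS  (CACHE_SIZE / PAGE_SIZE)    /* distinct cache page addresses */
#define NUM_FRAMES  1024u                       /* toy physical memory */

/* One allocation cursor per cache color over a toy frame pool. */
static unsigned next_free_frame_of_color[NUM_COLORS];

static unsigned alloc_frame_for_file_page(unsigned file_page_index)
{
    /* Pick the cache page address round-robin over the file's pages. */
    unsigned color = file_page_index % NUM_COLORS;
    unsigned frame = next_free_frame_of_color[color];

    if (frame >= NUM_FRAMES) {
        fprintf(stderr, "no free frame of color %u\n", color);
        exit(1);
    }
    /* In this toy pool, frames of one color are NUM_COLORS apart. */
    next_free_frame_of_color[color] = frame + NUM_COLORS;
    return frame;
}

int main(void)
{
    for (unsigned c = 0; c < NUM_COLORS; c++)
        next_free_frame_of_color[c] = c;        /* first frame of each color */

    /* Map eight pages of a retrieved file: each lands on a distinct color,
     * so the file's pages are spread evenly over the cache. */
    for (unsigned p = 0; p < 8; p++) {
        unsigned frame = alloc_frame_for_file_page(p);
        printf("file page %u -> frame %4u, color %2u, phys 0x%08x\n",
               p, frame, frame % NUM_COLORS, frame * PAGE_SIZE);
    }
    return 0;
}
```

Because consecutive file pages land on consecutive colors, no two of the first NUM_COLORS pages compete for the same cache page, which is the thrashing the abstract sets out to avoid.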
    • 23. Granted invention patent
    • Method and apparatus for cache miss reduction by simulating cache associativity
    • US5442571A
    • 1995-08-15
    • US250315
    • 1994-05-27
    • Richard L. Sites
    • Richard L. Sites
    • G06F12/08; G06F12/10
    • G06F12/0864; G06F12/1054; G06F2212/653
    • A computer system using virtual memory addressing and having a direct-mapped cache is operated in a manner to simulate the effect of a set associative cache by detecting cache misses and remapping pages in the main memory so that memory references which would have caused thrashing can instead coexist in the cache. Two memory addresses which are in different pages but which map to the same location in the cache may not reside in the direct-mapped cache at the same time, so alternate reference to these addresses by a task executing on the CPU would cause thrashing. However, if the location of one of these addresses in main memory is changed, the data items having these addresses can coexist in the cache, and performance will be markedly improved because thrashing will no longer result. For a CPU executing a virtual memory operating system, a page of data or instructions can be moved to a different physical page frame but remain at the same virtual address. This is accomplished by simply updating the page-mapping tables to reflect the new physical location of the page, and copying the data from the old page frame to the new one. The thrashing condition is detected and corrected dynamically by latching cache miss addresses and periodically sampling the latch, then remapping pages containing the addresses found upon sampling. The direct-mapped cache must be large enough to hold two or more pages.
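A quick way to see the effect described above is to simulate a direct-mapped, page-granular cache and compare a workload that alternates between two conflicting pages before and after one of them is remapped. The sketch below uses invented sizes and a deliberately simplified miss model; it is not the patent's detection hardware.

```c
#include <stdio.h>

#define CACHE_PAGES 64u                       /* page-sized slots in the cache */

static unsigned cache_tag[CACHE_PAGES];       /* physical page currently in each slot */

static unsigned cache_access(unsigned phys_page)
{
    unsigned slot = phys_page % CACHE_PAGES;  /* direct-mapped index */
    if (cache_tag[slot] == phys_page)
        return 0;                             /* hit */
    cache_tag[slot] = phys_page;              /* fill on miss */
    return 1;                                 /* miss */
}

/* A task that alternates between two pages 1000 times. */
static unsigned run(unsigned page_a, unsigned page_b)
{
    unsigned misses = 0;
    for (unsigned i = 0; i < 1000; i++) {
        misses += cache_access(page_a);
        misses += cache_access(page_b);
    }
    return misses;
}

int main(void)
{
    /* Physical pages 3 and 67 map to the same slot (67 % 64 == 3), so the
     * alternating task thrashes. */
    printf("conflicting frames: %u misses\n", run(3, 67));

    /* The remedy from the abstract: copy the second page's contents to
     * frame 68 and update the page table; its virtual address is unchanged
     * but it now indexes a different slot and the two pages coexist. */
    printf("after remapping   : %u misses\n", run(3, 68));
    return 0;
}
```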
    • 26. Granted invention patent
    • Page coloring with color inheritance for memory pages
    • US09158710B2
    • 2015-10-13
    • US11513984
    • 2006-08-31
    • Uday Savagaonkar
    • Uday Savagaonkar
    • G06F12/14; G06F12/10
    • G06F12/1458; G06F12/1009; G06F12/145; G06F2212/653
    • Apparatuses, methods, and media for page coloring with color inheritance for memory pages are disclosed. Some embodiments may include an interface to access a memory and a paging unit including translation logic, inheritance logic, and comparison logic. The translation logic translates a first address to a second address based on an entry in a data structure, wherein the first address is provided by an instruction stored in a first page in the memory and the entry includes a base address of a second page in the memory including the second address and a color of the second page. The inheritance logic may determine an effective current page color of the first page based on a color of the first page. The comparison logic may compare the effective current page color of the first page to the color of the second page. Other embodiments are disclosed and claimed.
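The abstract above names three pieces of paging-unit logic: translation, color inheritance, and color comparison. The sketch below mocks them in C to show how a color mismatch can veto a translation; the field names, the use of color 0 for "uncolored", and the inheritance rule are assumptions made for illustration, not the claimed hardware design.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define NO_COLOR 0u                 /* assumed convention: color 0 means "uncolored" */

struct page_entry {
    uint64_t base;                  /* physical base address of the page */
    unsigned color;                 /* color recorded in the paging structure */
};

/* Inheritance logic: a colored code page uses its own color; an uncolored
 * one runs with the color it inherited from its caller. */
static unsigned effective_color(const struct page_entry *code_page,
                                unsigned inherited_color)
{
    return code_page->color != NO_COLOR ? code_page->color : inherited_color;
}

/* Translation + comparison logic: produce a physical address only when the
 * effective color of the code page matches the color of the target page. */
static bool translate(const struct page_entry *code_page, unsigned inherited_color,
                      const struct page_entry *data_page, uint64_t offset,
                      uint64_t *phys)
{
    unsigned cur = effective_color(code_page, inherited_color);
    if (data_page->color != NO_COLOR && data_page->color != cur)
        return false;               /* color mismatch: refuse the access */
    *phys = data_page->base + offset;
    return true;
}

int main(void)
{
    struct page_entry code   = { 0x1000, 2 };   /* page holding the instruction */
    struct page_entry shared = { 0x9000, 2 };   /* same color: accessible */
    struct page_entry secret = { 0x8000, 3 };   /* different color: blocked */
    uint64_t phys;

    printf("code(2) -> shared(2): %s\n",
           translate(&code, NO_COLOR, &shared, 0x10, &phys) ? "ok" : "fault");
    printf("code(2) -> secret(3): %s\n",
           translate(&code, NO_COLOR, &secret, 0x10, &phys) ? "ok" : "fault");
    return 0;
}
```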
    • 27. Granted invention patent
    • Cache index coloring for virtual-address dynamic allocators
    • US08707006B2
    • 2014-04-22
    • US12899493
    • 2010-10-06
    • David Dice
    • David Dice
    • G06F12/00; G06F13/00; G06F13/28
    • G06F12/0223; G06F12/023; G06F12/0802; G06F12/10; G06F2212/653; G06F2212/657
    • A method for managing a memory, including obtaining a number of indices and a cache line size of a cache memory, computing a cache page size by multiplying the number of indices by the cache line size, calculating a greatest common denominator (GCD) of the cache page size and a first size class, incrementing, in response to the GCD of the cache page size and the first size class exceeding the cache line size, the first size class to generate an updated first size class, calculating a GCD of the cache page size and the updated first size class, creating, in response to the GCD of the cache page size and the updated first size class being less than the cache line size, a first superblock in the memory including a first plurality of blocks of the updated first size class, and creating a second superblock in the memory.
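The method above is essentially a loop over candidate size classes: compute the cache page size, then grow a size class until its GCD with the cache page size stops exceeding the cache line size, so blocks of that class no longer pile up on a few cache indices. A small C sketch of that loop follows; the set count, line size, starting class, and 8-byte increment step are assumptions, not values from the patent.

```c
#include <stdio.h>

/* Euclid's algorithm. */
static unsigned gcd(unsigned a, unsigned b)
{
    while (b != 0) {
        unsigned t = a % b;
        a = b;
        b = t;
    }
    return a;
}

int main(void)
{
    unsigned num_indices     = 64;                             /* cache sets (assumed) */
    unsigned cache_line_size = 64;                             /* bytes (assumed) */
    unsigned cache_page_size = num_indices * cache_line_size;  /* 4096 */

    unsigned size_class = 512;                                 /* starting class (assumed) */
    printf("cache page size = %u\n", cache_page_size);
    printf("initial size class = %u, gcd = %u\n",
           size_class, gcd(cache_page_size, size_class));

    /* gcd(4096, 512) = 512 exceeds the line size, so consecutive blocks of
     * this class reuse only a few cache indices.  Grow the class until the
     * gcd drops below the line size (the 8-byte step is an assumed alignment). */
    while (gcd(cache_page_size, size_class) > cache_line_size)
        size_class += 8;

    printf("updated size class = %u, gcd = %u\n",
           size_class, gcd(cache_page_size, size_class));
    /* A superblock of blocks of the updated class would be carved out here. */
    return 0;
}
```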
    • 29. Granted invention patent
    • Enhanced cache operation with remapping of pages for optimizing data relocation from addresses causing cache misses
    • US5630097A
    • 1997-05-13
    • US178487
    • 1994-01-07
    • David A. Orbits; Kenneth D. Abramson; H. Bruce Butts, Jr.
    • David A. Orbits; Kenneth D. Abramson; H. Bruce Butts, Jr.
    • G06F12/08; G06F12/10; G06F12/02
    • G06F12/10; G06F12/0864; G06F2212/653
    • A computer system executing virtual memory management and having a cache is operated in a manner to reduce cache misses by remapping pages of physical memory from which cache misses are detected. The method includes detecting cache misses, as by observing cache fill operations on the system bus, and then remapping the pages in the main memory which contain the addresses of the most frequent cache misses, so that memory references causing thrashing can then coexist in different pages of the cache. For a CPU executing a virtual memory operating system, a page of data or instructions can be moved to a different physical page frame but remain at the same virtual address, by simply updating the page-mapping tables to reflect the new physical location of the page, and copying the data from the old page frame to the new one. The primary feature of the invention is to add bus activity sampling logic to the CPU and enhance the operating system to allow the operating system to detect when cache thrashing is occurring and remap data pages to new physical memory locations to eliminate the cache thrashing situation.
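The policy described above is: sample cache-fill (miss) addresses from the bus, find the page taking the most misses, and move it to a frame with a different cache index by updating the page table and copying the page. The C sketch below mimics that flow with an in-memory histogram standing in for the bus sampling logic; the table layout, sizes, and the choice of replacement frame are invented for the example.

```c
#include <stdio.h>
#include <string.h>

#define PAGE_SHIFT  12u
#define NUM_FRAMES  64u             /* toy physical memory, one entry per frame */
#define CACHE_PAGES 16u             /* page-sized slots in the cache (assumed) */

static unsigned miss_count[NUM_FRAMES];   /* histogram built from sampled cache fills */
static unsigned page_table[NUM_FRAMES];   /* virtual page -> physical frame (toy) */

/* Called for every cache-fill address latched by the (hypothetical) sampler. */
static void record_sampled_fill(unsigned phys_addr)
{
    miss_count[(phys_addr >> PAGE_SHIFT) % NUM_FRAMES]++;
}

/* Periodic pass: find the frame with the most misses and remap whichever
 * virtual page uses it to a free frame with a different cache index.  The
 * data copy from the old frame to the new one is elided here. */
static void remap_hottest(unsigned free_frame)
{
    unsigned hot = 0;
    for (unsigned f = 1; f < NUM_FRAMES; f++)
        if (miss_count[f] > miss_count[hot])
            hot = f;

    for (unsigned vpage = 0; vpage < NUM_FRAMES; vpage++) {
        if (page_table[vpage] == hot) {
            page_table[vpage] = free_frame;   /* same virtual address, new frame */
            printf("vpage %u: frame %u (index %u) -> frame %u (index %u)\n",
                   vpage, hot, hot % CACHE_PAGES,
                   free_frame, free_frame % CACHE_PAGES);
        }
    }
    memset(miss_count, 0, sizeof(miss_count));  /* start the next sampling interval */
}

int main(void)
{
    for (unsigned v = 0; v < NUM_FRAMES; v++)
        page_table[v] = v;                      /* identity mapping to start */

    /* Pretend the sampler latched many fills for frame 5 and a few for 9. */
    for (int i = 0; i < 50; i++) record_sampled_fill(5u << PAGE_SHIFT);
    for (int i = 0; i < 3;  i++) record_sampled_fill(9u << PAGE_SHIFT);

    remap_hottest(22u);                         /* 22 % 16 differs from 5 % 16 */
    return 0;
}
```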
    • 30. Granted invention patent
    • Method and system for cache memory congruence class management in a data processing system
    • US5410663A
    • 1995-04-25
    • US962436
    • 1992-10-15
    • Robert A. Blackburn; Keith N. Langston; Peter G. Sutton
    • Robert A. Blackburn; Keith N. Langston; Peter G. Sutton
    • G06F12/10; G06F12/03
    • G06F12/1045; G06F2212/653
    • A method and system for cache memory congruence class management in a data processing system. A selected address within a data processing system will typically have a single real address, but may have multiple virtual addresses within multiple virtual address spaces in a multi-tasking system, each virtual address space including a segment index, a page index and a byte index. A memory cache may be utilized to improve processor performance by hashing a portion of each virtual memory address to an address within a congruence class in the cache; however, when the cache contains a greater number of congruence classes than the number of different byte index addresses, the virtual memory addresses of a single real memory address may hash to different congruence classes, reducing the ability of the processor to rapidly locate data within the cache. The method and system prevent this problem by first determining whether or not a virtual memory address exists within any virtual memory space in the system which corresponds to a selected address in real memory, in response to a request for a virtual memory address corresponding to that selected address. If such a virtual memory address already exists, a new virtual memory address is assigned such that the new virtual memory address will hash to the same congruence class as the existing virtual memory address, greatly enhancing the processor's efficiency at retrieving data within the cache. In the event no existing virtual memory address within the data processing system corresponds to the selected address, a virtual memory address may be arbitrarily assigned.
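The key rule in the abstract above is that a second virtual alias for a real page should be chosen so it hashes to the same congruence class as the existing alias. The sketch below shows one way such an assignment could look, assuming a simple modulo hash on the virtual page number; the hash, class count, and "hint" parameter are illustrative assumptions, not the patented mechanism.

```c
#include <stdio.h>

#define NUM_CLASSES 128u            /* congruence classes in the cache (assumed) */

/* Assumed hash: the low page-index bits select the congruence class. */
static unsigned congruence_class(unsigned vpage)
{
    return vpage % NUM_CLASSES;
}

/* Assign a virtual page number for real page `rpage`.  If the real page
 * already has a virtual alias, choose the new page number so it hashes to
 * the same congruence class; otherwise the hint is used as-is. */
static unsigned assign_vpage(unsigned rpage, int has_alias,
                             unsigned alias_vpage, unsigned hint_vpage)
{
    (void)rpage;                    /* the class choice depends only on the alias */

    if (!has_alias)
        return hint_vpage;          /* arbitrary assignment is fine */

    unsigned want = congruence_class(alias_vpage);
    unsigned base = hint_vpage - (hint_vpage % NUM_CLASSES);
    unsigned candidate = base + want;           /* next page number in that class */
    if (candidate < hint_vpage)
        candidate += NUM_CLASSES;
    return candidate;
}

int main(void)
{
    unsigned alias = 0x123;         /* existing virtual page of real page 0x40 */
    unsigned vp = assign_vpage(0x40, 1, alias, 0x2000);

    printf("existing alias class %u, new vpage 0x%x in class %u\n",
           congruence_class(alias), vp, congruence_class(vp));
    return 0;
}
```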