    • 52. Granted invention patent
    • Apparatus and method for pre-fetching data to cached memory using persistent historical page table data
    • Publication number: US07099999B2
    • Publication date: 2006-08-29
    • Application number: US10675732
    • Filing date: 2003-09-30
    • Inventor: David Arnold Luick
    • Applicant: David Arnold Luick
    • IPC: G06F12/00 G06F12/08 G06F12/10
    • CPC: G06F12/0862 G06F2212/6024 G06F2212/654
    • A computer system includes a main memory, at least one processor, and at least one level of cache. The system maintains reference history data with respect to each addressable page in memory, preferably in a page table. The reference history data is preferably used to determine which cacheable sub-units of the page should be pre-fetched to the cache. The reference history data is preferably an up or down counter which is incremented if the cacheable sub-unit is loaded into cache and is referenced by the processor, and decremented if the sub-unit is loaded into cache and is not referenced before being cast out. The reference counter thus expresses an approximate likelihood, based on recent history, that the sub-unit will be referenced in the near future.
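The counter scheme in the abstract above can be illustrated with a short sketch. This is a minimal Python model, not the patented implementation: the counter width, the number of sub-units per page, and names such as `PageHistory` and `PREFETCH_THRESHOLD` are assumptions chosen for illustration.

```python
# Minimal sketch of per-sub-unit reference-history counters kept with a page-table entry.
# Counter width, sub-unit count, and the prefetch threshold are illustrative assumptions.

SUB_UNITS_PER_PAGE = 8      # cacheable sub-units tracked per page
COUNTER_MAX = 3             # 2-bit saturating up/down counter
PREFETCH_THRESHOLD = 2      # prefetch a sub-unit whose counter is at or above this

class PageHistory:
    """Reference history data maintained alongside a page-table entry."""
    def __init__(self):
        self.counters = [COUNTER_MAX // 2] * SUB_UNITS_PER_PAGE

    def on_cast_out(self, sub_unit: int, was_referenced: bool) -> None:
        """Update the counter when a cached sub-unit is cast out of the cache."""
        c = self.counters[sub_unit]
        if was_referenced:
            # Incremented: it was loaded into cache and referenced by the processor.
            self.counters[sub_unit] = min(COUNTER_MAX, c + 1)
        else:
            # Decremented: it was loaded but never referenced before cast-out.
            self.counters[sub_unit] = max(0, c - 1)

    def sub_units_to_prefetch(self) -> list[int]:
        """Sub-units whose recent history suggests they will be referenced soon."""
        return [i for i, c in enumerate(self.counters) if c >= PREFETCH_THRESHOLD]

# Usage: only the sub-units with a favorable history are selected for prefetch.
hist = PageHistory()
hist.on_cast_out(0, was_referenced=True)
hist.on_cast_out(5, was_referenced=False)
print(hist.sub_units_to_prefetch())
```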
    • 55. Granted invention patent
    • Zero delay data cache effective address generation
    • Publication number: US06941421B2
    • Publication date: 2005-09-06
    • Application number: US10282519
    • Filing date: 2002-10-29
    • Inventor: David Arnold Luick
    • Applicant: David Arnold Luick
    • IPC: G06F12/08 G06F9/355 G06F12/00 G06F12/02
    • CPC: G06F9/355 G06F12/0802
    • A method and system for accessing a specified cache line using previously decoded base address offset bits stored with a register file, which eliminate the need to perform a full address decode in the cache access path and replace the multiple-level address-generation adder logic with a single level of rotator/multiplexer logic. The decoded base register offset bits enable direct selection of the specified cache line, removing the need to add and decode the base register offset bits on each access to the cache memory. Other cache lines are accessed by rotating the decoded base address offset bits, selecting another cache word line.
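The abstract above replaces an adder-plus-decoder in the cache access path with a rotation of pre-decoded select bits. Below is a minimal Python sketch of that idea; the cache geometry and the helper names (`decode`, `rotate`, `NUM_LINES`) are illustrative assumptions, not details from the patent.

```python
# Minimal sketch: select a cache word line by rotating a pre-decoded (one-hot)
# base-address index instead of adding an offset and then decoding the sum.
# NUM_LINES and the helper names are illustrative assumptions.

NUM_LINES = 16  # cache word lines addressable by the decoded index bits

def decode(index: int) -> list[int]:
    """One-hot decode of a cache-line index (done once, stored with the register file)."""
    return [1 if i == index else 0 for i in range(NUM_LINES)]

def rotate(one_hot: list[int], amount: int) -> list[int]:
    """One level of rotator/multiplexer logic: shift the one-hot select by 'amount' lines."""
    amount %= NUM_LINES
    return one_hot[-amount:] + one_hot[:-amount] if amount else one_hot[:]

# The base register's line index (here 5) was decoded when the register was written.
predecoded = decode(5)
# A load at base + 3 lines selects its word line with a single rotation,
# with no adder or full decoder in the cache access path.
selected = rotate(predecoded, 3)
assert selected == decode((5 + 3) % NUM_LINES)
print(selected.index(1))
```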
    • 57. Granted invention patent
    • Instruction pair detection and pseudo ports for cache array
    • Publication number: US06763421B2
    • Publication date: 2004-07-13
    • Application number: US09975405
    • Filing date: 2001-10-11
    • Inventor: David Arnold Luick
    • Applicant: David Arnold Luick
    • IPC: G06F12/00
    • CPC: G06F9/30043 G06F9/3824 G06F9/3885 G06F12/0848
    • Embodiments are provided in which first and second instructions are executed in parallel. A first and a second address are generated according to the first and second instructions, respectively. The first address is used to select a data cache line of a data cache RAM and a first data bank from that cache line. The second address is used to select a second data bank from the data cache. The first and second data banks are output in parallel from the data cache RAM. An instruction pair testing circuit tests the probability that the first and second instructions access the same data cache line of the data cache RAM. If it is unlikely that the two instructions will access the same data cache line, the second instruction is refetched and re-executed, and the second data bank is not used.
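A rough Python sketch of the pairing idea in the abstract above follows. It is a simplification: it checks the line index directly instead of modeling a prediction circuit, and the line/bank geometry and names (`execute_pair`, `BANK_BYTES`) are assumptions.

```python
# Minimal sketch of pairing two loads on one data-cache-line access ("pseudo port").
# The line/bank geometry and helper names are illustrative assumptions.

LINE_BYTES = 128
BANK_BYTES = 16  # a cache line is split into independently selectable banks

def line_index(addr: int) -> int:
    return addr // LINE_BYTES

def bank_index(addr: int) -> int:
    return (addr % LINE_BYTES) // BANK_BYTES

def execute_pair(addr1: int, addr2: int, cache_line_read):
    """Try to satisfy two parallel loads with a single cache-line access."""
    if line_index(addr1) == line_index(addr2):
        line = cache_line_read(line_index(addr1))
        # Same line: both banks are read out of it in parallel.
        return line[bank_index(addr1)], line[bank_index(addr2)]
    # Different lines: only the first result is used; the second load is
    # refetched and re-executed with its own cache access.
    line = cache_line_read(line_index(addr1))
    return line[bank_index(addr1)], None

# Usage with a fake cache that returns the bank values of a line.
fake_cache = lambda idx: [f"line{idx}-bank{b}" for b in range(LINE_BYTES // BANK_BYTES)]
print(execute_pair(0x100, 0x130, fake_cache))   # same line: both loads served
print(execute_pair(0x100, 0x1000, fake_cache))  # different lines: second deferred
```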
    • 59. Invention patent application
    • Method and Apparatus for Multiple Load Instruction Execution
    • Publication number: US20090006818A1
    • Publication date: 2009-01-01
    • Application number: US11769271
    • Filing date: 2007-06-27
    • Inventor: David Arnold Luick
    • Applicant: David Arnold Luick
    • IPC: G06F9/30
    • CPC: G06F9/30043 G06F9/382 G06F9/3824 G06F9/3853 G06F9/3857 G06F9/3859 G06F9/3869 G06F9/3889
    • A method and apparatus for executing instructions. The method includes receiving a first load instruction and a second load instruction. The method also includes issuing the first load instruction and the second load instruction to a cascaded delayed execution pipeline unit having at least a first execution pipeline and a second execution pipeline, wherein the second execution pipeline executes an instruction in a common issue group in a delayed manner relative to another instruction in the common issue group executed in the first execution pipeline. The method also includes accessing a cache by executing the first load instruction and the second load instruction. A delay between execution of the first load instruction and the second load instruction allows the cache to complete the access with the first load instruction before beginning the access with the second load instruction.
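The timing argument in the abstract above can be shown with a small scheduling sketch. The cycle count and names (`CACHE_ACCESS_CYCLES`, `schedule_issue_group`) are assumptions; the point is only that the delay built into the cascaded pipeline lets each cache access finish before the next one starts.

```python
# Minimal sketch of a cascaded, delayed execution pipeline for loads issued in
# one common issue group. Cycle counts and names are illustrative assumptions.

CACHE_ACCESS_CYCLES = 2  # cycles the cache needs to complete one load

def schedule_issue_group(loads):
    """Issue loads of the same group to cascaded pipelines: pipeline i starts its
    cache access CACHE_ACCESS_CYCLES * i cycles after pipeline 0, so each access
    completes before the next one begins."""
    schedule = []
    for i, name in enumerate(loads):
        start = i * CACHE_ACCESS_CYCLES   # delay built into the cascaded pipeline
        schedule.append((name, start, start + CACHE_ACCESS_CYCLES))
    return schedule

# Usage: the second load's cache access only begins once the first has finished.
for name, start, done in schedule_issue_group(["load1", "load2"]):
    print(f"{name}: cache access during cycles {start}..{done - 1}")
```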
    • 60. Invention patent application
    • L2 Cache/Nest Address Translation
    • Publication number: US20090006803A1
    • Publication date: 2009-01-01
    • Application number: US11769978
    • Filing date: 2007-06-28
    • Inventor: David Arnold Luick
    • Applicant: David Arnold Luick
    • IPC: G06F9/26
    • CPC: G06F12/0897 G06F12/1045
    • A method and apparatus for accessing cache memory in a processor. The method includes accessing requested data in one or more level one caches of the processor using requested effective addresses of the requested data. If the one or more level one caches of the processor do not contain requested data corresponding to the requested effective addresses, the requested effective addresses are translated to real addresses. A lookaside buffer includes a corresponding entry for each cache line in each of the one or more level one caches of the processor. The corresponding entry indicates a translation from the effective addresses to the real addresses for the cache line. The translated real addresses are used to access a level two cache.
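A minimal Python sketch of the access path in the abstract above: the level one cache is looked up with effective addresses, and only an L1 miss goes through the per-line lookaside buffer to reach the level two cache with a real address. Sizes, names, and how lookaside entries get filled are assumptions made for illustration.

```python
# Minimal sketch of the L1-effective / L2-real access path described above.
# Sizes, names, and the filling of lookaside entries are illustrative assumptions.

LINE_BYTES = 64

class Cache:
    def __init__(self):
        self.lines = {}                    # line address -> cached data
    def lookup(self, line_addr):
        return self.lines.get(line_addr)

l1 = Cache()          # looked up with effective line addresses
l2 = Cache()          # looked up with real line addresses
lookaside = {}        # one entry per L1 line: effective line -> real line

def translate(eline: int) -> int:
    """Translate an effective line address to a real one via the lookaside buffer."""
    if eline in lookaside:
        return lookaside[eline]
    raise KeyError("translation miss: would fall back to a page-table walk")

def load(effective_addr: int):
    eline = effective_addr // LINE_BYTES
    data = l1.lookup(eline)
    if data is not None:
        return data                        # L1 hit: no translation needed
    # L1 miss: translate the effective address, then access L2 with the real address.
    return l2.lookup(translate(eline))

# Usage: a line that misses L1 but hits L2 through its recorded translation.
lookaside[0x10] = 0x9abc
l2.lines[0x9abc] = b"payload"
print(load(0x10 * LINE_BYTES))
```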